Discrete Laplace: dlaplace (two-sided geometric)#
The discrete Laplace distribution is an integer-valued analogue of the continuous Laplace ("double exponential"): it is symmetric, sharply peaked at its location, and has exponentially decaying tails.
In SciPy it appears as scipy.stats.dlaplace.
Learning goals#
understand what dlaplace models and when it is useful
write down the PMF and CDF and connect common parameterizations
derive mean/variance and the maximum-likelihood estimator (MLE)
sample from the distribution with NumPy only
visualize PMF/CDF and validate formulas by Monte Carlo
Notebook roadmap#
Title & classification
Intuition & motivation
Formal definition (PMF/CDF)
Moments & properties
Parameter interpretation
Derivations (expectation, variance, likelihood)
Sampling & simulation (NumPy-only)
Visualization (PMF/CDF + Monte Carlo)
SciPy integration (scipy.stats.dlaplace)
Statistical use cases
Pitfalls
Summary
Prerequisites#
basic probability (PMF/CDF, expectation)
comfort with geometric series
familiarity with numpy arrays and plotting
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
import os
import plotly.io as pio
from scipy.stats import chi2, dlaplace, fit
pio.templates.default = "plotly_white"
pio.renderers.default = os.environ.get("PLOTLY_RENDERER", "notebook")
np.set_printoptions(precision=4, suppress=True)
SEED = 7
rng = np.random.default_rng(SEED)
1) Title & Classification#
Distribution name: dlaplace (Discrete Laplace / two-sided geometric)
Type: Discrete
Support (standard form): (k \in \mathbb{Z})
SciPy includes a shift parameter loc, so the support becomes a shifted integer lattice:
[ k \in \mathrm{loc} + \mathbb{Z}. ]
Parameter space (SciPy):
shape a > 0
location loc ∈ ℝ (practically: integer shifts keep the support integer-valued)
We'll also use the reparameterization
[ q = e^{-a} \in (0, 1), ]
which acts like a tail ratio: (P(K=k+1)/P(K=k)=q) for (k\ge 0).
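As a quick numeric check of this ratio (a minimal sketch; a = 0.8 is an illustrative value), consecutive PMF values away from the center differ by exactly the factor q = e^{-a}:

```python
import numpy as np
from scipy.stats import dlaplace

a = 0.8            # illustrative shape parameter
q = np.exp(-a)     # tail ratio q = e^{-a}

# Consecutive PMF ratios on the nonnegative side: p(k+1)/p(k) = q for all k >= 0
k = np.arange(0, 10)
ratios = dlaplace.pmf(k + 1, a) / dlaplace.pmf(k, a)
print(np.allclose(ratios, q))  # ratios all equal e^{-0.8} ≈ 0.4493
```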
2) Intuition & Motivation#
dlaplace models symmetric integer-valued noise: deviations of size 1 are common, size 10 are possible but much rarer, and probabilities drop exponentially with distance from the center.
What it models#
integer measurement error (quantization/rounding + heavy tails)
net count changes (difference of two nonnegative counts)
robust integer residuals ("mostly small, occasionally big")
Typical real-world use cases#
Differential privacy for integer queries (the "geometric mechanism"): add noise with PMF (\propto e^{-\varepsilon |k|}).
Robust modeling of rounded residuals: negative log-likelihood is proportional to (|k-\mathrm{loc}|), mirroring the (\ell_1) loss.
Signal processing on integers: priors/penalties on integer differences that encourage sparsity in changes.
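The (\ell_1) connection can be made concrete with a short sketch (illustrative parameter values): the negative log-PMF is an affine function of (|k-\mathrm{loc}|) with slope a, so maximizing the likelihood in loc is literally (\ell_1) minimization.

```python
import numpy as np
from scipy.stats import dlaplace

a, loc = 0.7, 0          # illustrative parameters
k = np.arange(-8, 9)

# NLL(k) = -log(tanh(a/2)) + a*|k - loc|: a constant plus a times the l1 deviation
nll = -dlaplace.logpmf(k, a, loc=loc)
expected = -np.log(np.tanh(a / 2.0)) + a * np.abs(k - loc)
print(np.allclose(nll, expected))
```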
Relations to other distributions#
Continuous Laplace is the difference of two i.i.d. exponentials; discrete Laplace is the difference of two i.i.d. geometric random variables.
It is also called the two-sided geometric distribution.
With small a (so (q=e^{-a}\approx 1)), tails are heavy; with large a, tails die off quickly and mass concentrates at loc.
3) Formal Definition#
Let (a>0) and define (q=e^{-a}\in(0,1)). Write
[ c = \tanh(a/2) = \frac{1-q}{1+q}. ]
PMF#
For (K\in \mathbb{Z}) (standard form, loc=0), the probability mass function is
[ P(K=k) = f(k) = \tanh(a/2)\,\exp\bigl(-a|k|\bigr) = \frac{1-q}{1+q}\,q^{|k|},\qquad k\in\mathbb{Z}. ]
With a location shift loc, SciPy uses
[ P(K=k\mid a,\mathrm{loc}) = \tanh(a/2)\,\exp\bigl(-a|k-\mathrm{loc}|\bigr),\qquad k\in \mathrm{loc}+\mathbb{Z}. ]
CDF (standard form, loc=0)#
Using geometric-series sums, the CDF has a simple closed form. For integer (k):
[ F(k)=P(K\le k)= \begin{cases} \dfrac{q^{-k}}{1+q}, & k\le -1,\\ 1-\dfrac{q^{k+1}}{1+q}, & k\ge 0. \end{cases} ]
For a shifted distribution, replace (k) by (k-\mathrm{loc}) (and remember a discrete CDF is stepwise constant between lattice points).
def _check_a(a: float) -> float:
a = float(a)
if not np.isfinite(a) or a <= 0:
raise ValueError("`a` must be a positive, finite number.")
return a
def dlaplace_pmf(k, a: float, loc=0):
"""PMF of the discrete Laplace distribution in SciPy's parameterization.
Notes
-----
- The PMF is defined on the shifted integer lattice: k in loc + Z.
- If k-loc is not (approximately) an integer, the PMF is 0.
"""
a = _check_a(a)
k = np.asarray(k)
x = k - loc
x_round = np.round(x)
is_int = np.isclose(x, x_round)
c = np.tanh(a / 2.0)
out = np.where(is_int, c * np.exp(-a * np.abs(x_round)), 0.0)
return float(out) if out.ndim == 0 else out
def dlaplace_logpmf(k, a: float, loc=0):
a = _check_a(a)
k = np.asarray(k)
x = k - loc
x_round = np.round(x)
is_int = np.isclose(x, x_round)
logc = np.log(np.tanh(a / 2.0))
out = np.where(is_int, logc - a * np.abs(x_round), -np.inf)
return float(out) if out.ndim == 0 else out
def dlaplace_cdf(k, a: float, loc=0):
"""CDF in closed form (stepwise constant between lattice points)."""
a = _check_a(a)
q = np.exp(-a)
k = np.asarray(k)
x = np.floor(k - loc) # makes the function well-defined for non-integer inputs
out = np.where(x >= 0, 1.0 - (q ** (x + 1)) / (1.0 + q), (q ** (-x)) / (1.0 + q))
return float(out) if out.ndim == 0 else out
def dlaplace_moments(a: float, loc=0):
"""Return mean, variance, skewness, excess kurtosis, entropy (nats)."""
a = _check_a(a)
q = np.exp(-a)
mean = float(loc)
var = 2.0 * q / (1.0 - q) ** 2
skew = 0.0
kurt_excess = np.cosh(a) + 2.0
entropy = -np.log(np.tanh(a / 2.0)) + a / np.sinh(a)
return mean, var, skew, kurt_excess, entropy
def dlaplace_mgf(t, a: float, loc=0):
"""MGF M(t) = E[e^{tK}] for |t| < a."""
a = _check_a(a)
t = np.asarray(t, dtype=float)
q = np.exp(-a)
denom = (1.0 - q * np.exp(t)) * (1.0 - q * np.exp(-t))
core = (1.0 - q) ** 2 / denom
out = np.exp(t * loc) * core
return float(out) if out.ndim == 0 else out
def dlaplace_cf(w, a: float, loc=0):
"""Characteristic function phi(w) = E[e^{i w K}] (exists for all real w)."""
a = _check_a(a)
w = np.asarray(w, dtype=float)
q = np.exp(-a)
denom = 1.0 - 2.0 * q * np.cos(w) + q**2
core = (1.0 - q) ** 2 / denom
out = np.exp(1j * w * loc) * core
return out
def dlaplace_rvs_numpy(a: float, loc=0, size=1, rng: np.random.Generator | None = None):
"""Sample from dlaplace using NumPy only (difference of two geometrics)."""
a = _check_a(a)
if rng is None:
rng = np.random.default_rng()
q = np.exp(-a)
p = 1.0 - q
g1 = rng.geometric(p, size=size) - 1 # 0-based geometric: P(G=k)=(1-p)^k p
g2 = rng.geometric(p, size=size) - 1
return loc + (g1 - g2)
# Quick sanity checks
a = 0.8
loc = 0
k = np.arange(-200, 201)
pmf = dlaplace_pmf(k, a, loc=loc)
mass_in_range = pmf.sum()
# Tail mass outside [-N, N] has a closed form: P(|K| > N) = 2 q^{N+1} / (1+q)
q = np.exp(-a)
N = 200
tail_mass = 2 * (q ** (N + 1)) / (1 + q)
print(f"Mass in [-{N}, {N}] = {mass_in_range:.12f}")
print(f"Tail mass outside range = {tail_mass:.12e}")
print(f"Mass + tail (should be 1) = {mass_in_range + tail_mass:.12f}")
# CDF consistency: F(k) - F(k-1) = P(K=k)
Fk = dlaplace_cdf(k, a, loc=loc)
diff = Fk[1:] - Fk[:-1]
max_err = np.max(np.abs(diff - pmf[1:]))
print(f"Max |(F(k)-F(k-1)) - pmf(k)| over grid: {max_err:.3e}")
Mass in [-200, 200] = 1.000000000000
Tail mass outside range = 2.019809144837e-70
Mass + tail (should be 1) = 1.000000000000
Max |(F(k)-F(k-1)) - pmf(k)| over grid: 1.073e-16
4) Moments & Properties#
Let (q=e^{-a}). In standard form (loc=0):
Mean: (\mathbb{E}[K]=0) (symmetry). With location: (\mathbb{E}[K]=\mathrm{loc}).
Variance: [ \mathrm{Var}(K)=\mathbb{E}[K^2]=\frac{2q}{(1-q)^2}=\frac{2e^{-a}}{(1-e^{-a})^2}. ]
Skewness: 0
(Excess) kurtosis: [ \gamma_2=\frac{\mathbb{E}[(K-\mu)^4]}{\sigma^4}-3 = \cosh(a)+2. ]
MGF and characteristic function#
MGF (M(t)=\mathbb{E}[e^{tK}]) exists for (|t|<a) and equals [ M(t)=\frac{(1-q)^2}{(1-qe^{t})(1-qe^{-t})}. ]
Characteristic function (\varphi(\omega)=\mathbb{E}[e^{i\omega K}]) exists for all real (\omega): [ \varphi(\omega)=\frac{(1-q)^2}{1-2q\cos\omega+q^2}. ]
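As a numerical check of the MGF formula (a sketch with illustrative a and t, implementing both the closed form and the difference-of-geometrics sampler inline rather than relying on the helpers above):

```python
import numpy as np

a, t = 1.2, 0.5    # illustrative values; the MGF requires |t| < a
q = np.exp(-a)

# Closed-form MGF: M(t) = (1-q)^2 / ((1 - q e^t)(1 - q e^{-t}))
mgf = (1 - q) ** 2 / ((1 - q * np.exp(t)) * (1 - q * np.exp(-t)))

# Monte Carlo: K = G1 - G2 with 0-based geometrics of success probability p = 1 - q
rng = np.random.default_rng(0)
p = 1 - q
k = (rng.geometric(p, size=400_000) - 1) - (rng.geometric(p, size=400_000) - 1)
mgf_mc = np.mean(np.exp(t * k))
print(abs(mgf - mgf_mc) / mgf)  # relative error; small for this sample size
```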
Entropy#
The (Shannon) entropy in nats is [ H(K) = -\sum_k p(k)\log p(k) = -\log\tanh(a/2) + \frac{a}{\sinh(a)}. ]
def sample_moments(x: np.ndarray):
x = np.asarray(x, dtype=float)
m = x.mean()
c = x - m
v = np.mean(c**2)
skew = np.mean(c**3) / (v ** 1.5)
kurt_excess = np.mean(c**4) / (v**2) - 3.0
return m, v, skew, kurt_excess
a = 0.8
loc = 2
mean_f, var_f, skew_f, kurt_f, ent_f = dlaplace_moments(a, loc=loc)
mean_s, var_s, skew_s, kurt_s = dlaplace.stats(a, loc=loc, moments="mvsk")
ent_s = dlaplace.entropy(a, loc=loc)
x = dlaplace_rvs_numpy(a, loc=loc, size=200_000, rng=rng)
mean_mc, var_mc, skew_mc, kurt_mc = sample_moments(x)
print("Formula moments:")
print(" mean, var, skew, kurt_excess =", (mean_f, var_f, skew_f, kurt_f))
print(f" entropy (nats) = {ent_f:.6f}")
print("\nSciPy moments:")
print(" mean, var, skew, kurt_excess =", (float(mean_s), float(var_s), float(skew_s), float(kurt_s)))
print(f" entropy (nats) = {float(ent_s):.6f}")
print("\nMonte Carlo (n=200k):")
print(" mean, var, skew, kurt_excess =", (mean_mc, var_mc, skew_mc, kurt_mc))
Formula moments:
mean, var, skew, kurt_excess = (2.0, 2.9635341891843727, 0.0, 3.3374349463048447)
entropy (nats) = 1.868512
SciPy moments:
mean, var, skew, kurt_excess = (2.0, 2.963534189184372, 0.0, 3.3374349463048434)
entropy (nats) = 1.868512
Monte Carlo (n=200k):
mean, var, skew, kurt_excess = (2.0022, 2.9261851600000006, -0.002202117614788409, 3.2796668981421906)
5) Parameter Interpretation#
Shape parameter a (tail decay)#
In the PMF (p(k)\propto e^{-a|k-\mathrm{loc}|}), the parameter a is a decay rate:
larger a → faster decay → the distribution becomes sharply concentrated at loc
smaller a → slower decay → heavier tails and larger variance
Using (q=e^{-a}), the tail ratio is [ \frac{P(|K-\mathrm{loc}|=m+1)}{P(|K-\mathrm{loc}|=m)} = q. ]
Location parameter loc#
loc shifts the distribution left/right without changing its shape. For integer-valued modeling, loc is typically chosen as an integer (a shifted lattice).
k = np.arange(-20, 21)
a_values = [0.4, 0.8, 1.6]
fig = go.Figure()
for a in a_values:
fig.add_trace(
go.Scatter(
x=k,
y=dlaplace_pmf(k, a),
mode="lines+markers",
name=f"a={a}",
)
)
fig.update_layout(
title="PMF of dlaplace for different a (loc=0)",
xaxis_title="k",
yaxis_title="P(K=k)",
width=800,
height=430,
)
fig.show()
k = np.arange(-20, 21)
a_values = [0.4, 0.8, 1.6]
fig = go.Figure()
for a in a_values:
fig.add_trace(
go.Scatter(
x=k,
y=dlaplace_cdf(k, a),
mode="lines",
line_shape="hv",
name=f"a={a}",
)
)
fig.update_layout(
title="CDF of dlaplace for different a (loc=0)",
xaxis_title="k",
yaxis_title="F(k)=P(K≤k)",
width=800,
height=430,
)
fig.show()
6) Derivations#
Expectation#
In standard form (loc=0), the PMF is symmetric: (p(k)=p(-k)). Therefore
[ \mathbb{E}[K] = \sum_{k\in\mathbb{Z}} k\,p(k) = 0 ]
because each (k) term cancels with the (-k) term. With a location shift, (\mathbb{E}[K]=\mathrm{loc}).
Variance#
Let (q=e^{-a}) and (c=(1-q)/(1+q)). Then
[ \mathrm{Var}(K)=\mathbb{E}[K^2]=2c\sum_{k=1}^{\infty} k^2 q^k. ]
Using the standard geometric-series identity
[ \sum_{k=1}^{\infty} k^2 q^k = \frac{q(1+q)}{(1-q)^3},\qquad |q|<1, ]
we get
[ \mathbb{E}[K^2] = 2\,\frac{1-q}{1+q}\,\frac{q(1+q)}{(1-q)^3} = \frac{2q}{(1-q)^2}. ]
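The geometric-series identity used above is easy to verify numerically (a minimal sketch with an illustrative q, truncating the sum once the terms are negligible):

```python
import numpy as np

q = 0.45               # illustrative value, |q| < 1
k = np.arange(1, 2000) # truncation point: q**2000 is negligible at this q

lhs = np.sum(k**2 * q**k)            # sum_{k>=1} k^2 q^k
rhs = q * (1 + q) / (1 - q) ** 3     # closed form
print(np.isclose(lhs, rhs))
```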
Likelihood and MLE#
For observations (x_1,\dots,x_n\in\mathrm{loc}+\mathbb{Z}), the log-likelihood is
[ \ell(a,\mathrm{loc}) = \sum_{i=1}^n \log p(x_i\mid a,\mathrm{loc}) = n\,\log\tanh(a/2) - a\sum_{i=1}^n |x_i-\mathrm{loc}|. ]
For fixed a, maximizing (\ell) over loc is equivalent to minimizing (\sum_i |x_i-\mathrm{loc}|), so any median is an MLE of loc.
For fixed loc, differentiate with respect to a: [ \frac{\partial}{\partial a} \log\tanh(a/2) = \frac{1}{\sinh(a)}. ] Setting the score to zero yields the closed-form MLE [ \hat a = \operatorname{asinh}\!\left(\frac{n}{\sum_i |x_i-\mathrm{loc}|}\right). ] If all observations equal loc, the sum in the denominator is zero and the likelihood increases as (a\to\infty) (a degenerate spike at loc).
def dlaplace_loglik(data: np.ndarray, a: float, loc=0) -> float:
a = _check_a(a)
data = np.asarray(data)
return data.size * np.log(np.tanh(a / 2.0)) - a * np.sum(np.abs(data - loc))
def dlaplace_mle_closed_form(data: np.ndarray):
"""Closed-form MLE under an integer-location constraint."""
data = np.asarray(data)
if data.ndim != 1:
raise ValueError("data must be 1D")
x = np.sort(data)
n = x.size
# Any median minimizes sum |x_i - loc|. For even n, choose the lower median.
loc_hat = x[(n - 1) // 2]
S = np.sum(np.abs(x - loc_hat))
if S == 0:
a_hat = np.inf
else:
a_hat = np.arcsinh(n / S)
return float(a_hat), float(loc_hat)
# Demo: estimate parameters from simulated data
a_true = 0.9
loc_true = -3
data = dlaplace_rvs_numpy(a_true, loc=loc_true, size=3_000, rng=rng)
a_hat, loc_hat = dlaplace_mle_closed_form(data)
print("True (a, loc):", (a_true, loc_true))
print("MLE (a, loc):", (a_hat, loc_hat))
True (a, loc): (0.9, -3)
MLE (a, loc): (0.8925832494794965, -3.0)
7) Sampling & Simulation (NumPy-only)#
A very convenient representation is:
If (G_1,G_2) are i.i.d. geometric random variables on ({0,1,2,\dots}) with
[
P(G=g)=(1-q)q^g,\qquad q=e^{-a},
]
then
[
K = G_1 - G_2
]
has
[
P(K=k)=\frac{1-q}{1+q} q^{|k|},
]
which is exactly the dlaplace PMF (standard form).
So we can sample by drawing two geometrics and subtracting them. The function dlaplace_rvs_numpy above implements this in a vectorized way.
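Beyond the visual comparison in the next section, a compact numeric check of the sampler is the total-variation distance between the empirical and theoretical PMFs (a sketch with illustrative values; the 0.01 tolerance is a loose illustrative threshold, not a formal test):

```python
import numpy as np

a, n = 0.8, 200_000
q = np.exp(-a)
rng = np.random.default_rng(123)

# Sample K = G1 - G2 using 0-based geometrics (success probability p = 1 - q)
p = 1 - q
x = (rng.geometric(p, size=n) - 1) - (rng.geometric(p, size=n) - 1)

# Theoretical PMF on a wide grid, empirical frequencies on the same grid
k = np.arange(-60, 61)
pmf = np.tanh(a / 2.0) * np.exp(-a * np.abs(k))
emp = np.array([(x == ki).mean() for ki in k])

tv = 0.5 * np.sum(np.abs(emp - pmf))  # total-variation distance (half the l1 gap)
print(tv < 0.01)
```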
8) Visualization (PMF/CDF + Monte Carlo)#
Below we compare the theoretical PMF to a Monte Carlo estimate from NumPy-only sampling.
a = 0.8
loc = 0
n = 80_000
x = dlaplace_rvs_numpy(a, loc=loc, size=n, rng=rng)
k = np.arange(-25, 26)
pmf_theory = dlaplace_pmf(k, a, loc=loc)
# Empirical PMF on the grid (probability outside grid is ignored for plotting)
counts = np.array([(x == ki).mean() for ki in k])
fig = go.Figure()
fig.add_trace(go.Bar(x=k, y=counts, name="Monte Carlo", opacity=0.65))
fig.add_trace(go.Scatter(x=k, y=pmf_theory, mode="lines+markers", name="Theory"))
fig.update_layout(
title=f"PMF: Monte Carlo vs theory (a={a}, loc={loc}, n={n:,})",
xaxis_title="k",
yaxis_title="P(K=k)",
width=850,
height=440,
)
fig.show()
9) SciPy Integration#
SciPy provides scipy.stats.dlaplace with the usual discrete-distribution API:
dlaplace.pmf(k, a, loc=0)
dlaplace.cdf(k, a, loc=0)
dlaplace.rvs(a, loc=0, size=..., random_state=...)
For fitting parameters, dlaplace itself does not expose a .fit(...) method (unlike many continuous distributions). In SciPy 1.9+, you can use the generic function scipy.stats.fit to perform MLE subject to bounds (and integer constraints where applicable).
a = 0.8
loc = 2
k = np.arange(-5, 6)
pmf_np = dlaplace_pmf(k, a, loc=loc)
pmf_sp = dlaplace.pmf(k, a, loc=loc)
cdf_np = dlaplace_cdf(k, a, loc=loc)
cdf_sp = dlaplace.cdf(k, a, loc=loc)
print("Max |pmf_numpy - pmf_scipy|:", float(np.max(np.abs(pmf_np - pmf_sp))))
print("Max |cdf_numpy - cdf_scipy|:", float(np.max(np.abs(cdf_np - cdf_sp))))
# SciPy sampling
data = dlaplace.rvs(a, loc=loc, size=2_000, random_state=rng)
# Closed-form MLE
a_hat_cf, loc_hat_cf = dlaplace_mle_closed_form(data)
# SciPy's generic fitter: must provide bounds to let loc vary (otherwise it's fixed at 0 by default).
bounds = {
"a": (1e-6, 10.0),
"loc": (int(data.min()) - 10, int(data.max()) + 10),
}
res = fit(dlaplace, data, bounds=bounds, method="mle")
print("\nTrue (a, loc):", (a, loc))
print("Closed-form MLE:", (a_hat_cf, loc_hat_cf))
print("SciPy fit params:", (float(res.params.a), float(res.params.loc)))
Max |pmf_numpy - pmf_scipy|: 0.0
Max |cdf_numpy - cdf_scipy|: 6.938893903907228e-18
True (a, loc): (0.8, 2)
Closed-form MLE: (0.8189853250010287, 2.0)
SciPy fit params: (0.8189852974582236, 2.0)
10) Statistical Use Cases#
A) Hypothesis testing (example: tail parameter)#
Because the likelihood is available in closed form, you can build likelihood-ratio tests for a (often with loc fixed/known).
B) Bayesian modeling (robust integer noise)#
If [ Y_i = \theta + \varepsilon_i,\qquad \varepsilon_i\sim\text{dlaplace}(a,\,\mathrm{loc}=0), ] and you put a flat prior over integer (\theta), then the posterior satisfies [ \log p(\theta\mid y) = \text{const} - a\sum_i |y_i-\theta|, ] so the MAP estimate is (any) median of the observations. This is the discrete counterpart of the well-known (\ell_1) connection for the continuous Laplace.
C) Generative modeling (integer-valued heavy-tailed noise)#
Use dlaplace as an emission/noise model when your observations are integers and you expect occasional large deviations (e.g., perturbed counts, rounded sensor data, integer residuals).
# Likelihood-ratio test example: H0: a = a0 vs H1: a free (loc known)
a0 = 0.8
a_true = 1.1
loc = 0
n = 800
data = dlaplace_rvs_numpy(a_true, loc=loc, size=n, rng=rng)
S = np.sum(np.abs(data - loc))
a_hat = np.inf if S == 0 else np.arcsinh(n / S)
ll_hat = dlaplace_loglik(data, a_hat, loc=loc)
ll_0 = dlaplace_loglik(data, a0, loc=loc)
LR = 2.0 * (ll_hat - ll_0)
p_value = chi2.sf(LR, df=1)
print(f"a_true={a_true}, a0={a0}, a_hat={a_hat:.4f}")
print(f"LR statistic={LR:.3f}, approx p-value={p_value:.4f}")
print("(Asymptotic chi-square calibration is approximate; check via simulation for small n.)")
a_true=1.1, a0=0.8, a_hat=1.0802
LR statistic=72.877, approx p-value=0.0000
(Asymptotic chi-square calibration is approximate; check via simulation for small n.)
# Bayesian modeling demo: posterior over an integer location parameter theta
a = 0.9
theta_true = 4
n = 50
y = dlaplace_rvs_numpy(a, loc=theta_true, size=n, rng=rng)
theta_grid = np.arange(int(y.min()) - 10, int(y.max()) + 11)
log_post = np.array([-a * np.sum(np.abs(y - th)) for th in theta_grid], dtype=float)
log_post -= log_post.max()
post = np.exp(log_post)
post /= post.sum()
theta_map = theta_grid[np.argmax(post)]
theta_median = np.sort(y)[(n - 1) // 2]
print("theta_true:", theta_true)
print("MAP theta:", int(theta_map))
print("sample median:", int(theta_median))
fig = px.line(
x=theta_grid,
y=post,
title="Posterior over integer location (flat prior; a known)",
labels={"x": "theta", "y": "posterior probability"},
)
fig.add_vline(x=theta_true, line_dash="dash", line_color="black", annotation_text="true")
fig.add_vline(x=theta_map, line_dash="dot", line_color="red", annotation_text="MAP")
fig.update_layout(width=850, height=420)
fig.show()
theta_true: 4
MAP theta: 4
sample median: 4
# Generative modeling / differential privacy-style demo: noisy release of a count
true_count = 250
a_values = [0.5, 1.0, 2.0] # larger a => less noise
n = 40_000
fig = go.Figure()
xs = np.arange(true_count - 40, true_count + 41)
for a in a_values:
noise = dlaplace_rvs_numpy(a, loc=0, size=n, rng=rng)
released = true_count + noise
emp = np.array([(released == x).mean() for x in xs])
fig.add_trace(go.Scatter(x=xs, y=emp, mode="lines", name=f"a={a}"))
fig.add_vline(x=true_count, line_dash="dash", line_color="black")
fig.update_layout(
title="Noisy count release: empirical PMF for different a",
xaxis_title="released count",
yaxis_title="probability",
width=900,
height=430,
)
fig.show()
11) Pitfalls#
Invalid parameters: a must be strictly positive. As a → 0+, tails become extremely heavy and moments blow up.
Location lattice: the distribution lives on loc + Z. For most integer-data use cases, choose loc as an integer.
Underflow in tails: for large (|k|), pmf(k, ...) can underflow to 0. Use logpmf (or work in log-space) for likelihood computations.
MGF domain: the MGF exists only for (|t|<a). Near (\pm a) the denominator becomes ill-conditioned.
Parameterization confusion: some sources use q \in (0,1) directly (two-sided geometric). SciPy uses a > 0 with q = e^{-a}.
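The underflow pitfall is easy to demonstrate with a minimal sketch (illustrative values, computing the PMF directly from the formula): once a|k| exceeds roughly 745, exp underflows in float64, while the log-space value stays finite.

```python
import numpy as np

a, k = 2.0, 500  # illustrative: a*|k| = 1000, far past the float64 underflow point

pmf = np.tanh(a / 2.0) * np.exp(-a * abs(k))    # exp(-1000) underflows to 0.0
logpmf = np.log(np.tanh(a / 2.0)) - a * abs(k)  # finite: about -1000.27
print(pmf, logpmf)
```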
12) Summary#
dlaplace is a symmetric discrete distribution on (\mathbb{Z}) (or loc + Z) with exponentially decaying tails.
PMF: (p(k)=\tanh(a/2)\,e^{-a|k-\mathrm{loc}|}) with a > 0. A convenient reparameterization is (q=e^{-a}), giving (p(k)=\frac{1-q}{1+q}q^{|k-\mathrm{loc}|}).
Mean = loc, variance (2q/(1-q)^2), skewness 0, excess kurtosis (\cosh(a)+2), entropy (-\log\tanh(a/2)+a/\sinh(a)).
Sampling (NumPy-only): draw two 0-based geometrics with (q=e^{-a}) and subtract: (K=G_1-G_2+\mathrm{loc}).
For inference: the loc MLE is a median; the a MLE is (\operatorname{asinh}(n/\sum_i|x_i-\mathrm{loc}|)). SciPy provides dlaplace plus the generic scipy.stats.fit utility for bounded MLE.